34 research outputs found

    Are probabilistic spiking neural networks suitable for reservoir computing?

    This study employs networks of stochastic spiking neurons as reservoirs for liquid state machines (LSM). We experimentally investigate the separation property of these reservoirs and show their ability to generalize classes of input signals. Like traditional LSM, probabilistic LSM (pLSM) have the separation property, enabling them to distinguish between different classes of input stimuli. Furthermore, our results indicate some potential advantages of non-deterministic LSM, which can improve upon the separation ability of the liquid. Three non-deterministic neural models are considered, and several parameter configurations are explored for each. We demonstrate some of the characteristics of pLSM and compare them to their deterministic counterparts. pLSM offer more flexibility through their probabilistic parameters, yielding better performance for some values of these parameters.
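    As an illustration of the idea above (a minimal sketch, not code from the paper), the snippet below implements a pool of stochastic leaky integrate-and-fire neurons with a sigmoidal "soft" firing threshold and measures the separation property as the distance between the liquid states produced by two input classes. All constants, names, and the feedforward (non-recurrent) pool structure are simplifying assumptions.

```python
import math
import random

class StochasticLIF:
    """Leaky integrate-and-fire neuron with a probabilistic ('soft')
    threshold: it fires with probability sigmoid((u - theta) / beta)
    instead of deterministically whenever u >= theta."""

    def __init__(self, theta=1.0, beta=0.2, decay=0.9):
        self.u = 0.0          # membrane potential
        self.theta = theta    # threshold centre
        self.beta = beta      # threshold softness (smaller = more deterministic)
        self.decay = decay    # leak per time step

    def step(self, current):
        self.u = self.decay * self.u + current
        p_fire = 1.0 / (1.0 + math.exp(-(self.u - self.theta) / self.beta))
        if random.random() < p_fire:
            self.u = 0.0      # reset after a spike
            return 1
        return 0

def liquid_state(neurons, weights, input_spikes):
    """Drive the pool with a binary input train; the liquid state is the
    vector of per-neuron spike counts."""
    counts = [0] * len(neurons)
    for s in input_spikes:
        for i, n in enumerate(neurons):
            counts[i] += n.step(weights[i] * s)
    return counts

def separation(state_a, state_b):
    """Euclidean distance between liquid states: the larger it is, the
    better the reservoir separates the two input classes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(state_a, state_b)))
```

    For example, driving two identical fresh pools with a dense and a sparse input train yields clearly different spike-count states, i.e. a positive separation.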

    Modelling the effect of genes on the dynamics of probabilistic spiking neural networks for computational neurogenetic modelling

    Computational neuro-genetic models (CNGM) combine a gene regulatory network (GRN) model at a lower level with a spiking neural network (SNN) model at a higher level to model the dynamic interaction between genes and spiking patterns of activity under certain conditions. The paper demonstrates that it is possible to model and trace over time the effect of a gene on the total spiking behavior of the SNN when the gene controls a parameter of the stochastic spiking neuron model used to build the SNN. Such CNGM can potentially be used to study neurodegenerative diseases or to develop CNGM for cognitive robotics.
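    A minimal sketch of the mechanism described above (the gene trajectory, constants and dynamics are illustrative assumptions, not the paper's GRN model): a toy gene-expression level decays over time and controls the firing threshold of a stochastic spiking neuron, so the neuron's total spiking activity can be traced as the gene level changes.

```python
import math
import random

def gene_level(t, g0=1.0, rate=0.05):
    """Toy gene-expression trajectory: exponential decay toward zero,
    standing in for one node of a gene regulatory network."""
    return g0 * math.exp(-rate * t)

def simulate(steps=100, seed=0):
    """Stochastic spiking neuron whose firing threshold is controlled by
    the gene level: lower expression -> lower threshold -> more spikes."""
    random.seed(seed)
    u = 0.0
    spikes = []
    for t in range(steps):
        theta = 0.5 + gene_level(t)   # gene-controlled neuron parameter
        u = 0.9 * u + 0.3             # leaky integration of a constant drive
        p_fire = 1.0 / (1.0 + math.exp(-(u - theta) / 0.1))
        if random.random() < p_fire:
            spikes.append(1)
            u = 0.0                   # reset after a spike
        else:
            spikes.append(0)
    return spikes
```

    As the gene level decays, the threshold drops and the firing rate rises, so the second half of the simulation contains more spikes than the first.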

    Incremental learning algorithm for spike pattern classification

    In previous work (Mohemmed et al.), the authors proposed a supervised learning algorithm to train a spiking neuron to associate input/output spike patterns. In this paper, the association learning rule is applied to train a single layer of spiking neurons to classify multi-class spike patterns, whereby the neurons are trained to recognize an input spike pattern by emitting a predetermined spike train. The training is performed in an incremental fashion, i.e. the synaptic weights are adjusted after each presentation of a training pattern. The individual neurons are trained independently of the other neurons and on patterns from a single class. A spike train comparison criterion is used to decode the output spike trains into class labels. Simulation experiments on a synthetic dataset of spike patterns show a high efficiency in solving the considered classification task.
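    The abstract does not specify the spike train comparison criterion; the sketch below assumes a van-Rossum-style distance (L2 difference of exponentially kernelized trains) and uses it to decode an output train into the label of the nearest predetermined class template.

```python
import math

def kernelize(spike_times, t_max=100, tau=5.0):
    """Convolve a spike train (list of firing times) with a causal
    exponential kernel, yielding an analog trace sampled at t = 0..t_max-1."""
    return [sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)
            for t in range(t_max)]

def train_distance(times_a, times_b, t_max=100, tau=5.0):
    """van-Rossum-style distance: L2 difference of the kernelized traces."""
    fa = kernelize(times_a, t_max, tau)
    fb = kernelize(times_b, t_max, tau)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

def decode(output_train, class_templates):
    """Assign the label of the class whose template train is closest."""
    return min(class_templates,
               key=lambda c: train_distance(output_train, class_templates[c]))
```

    With templates {'A': [10, 30, 50], 'B': [20, 60]}, an output train like [11, 29, 51] decodes to 'A' because its kernelized trace is far closer to A's template than to B's.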

    An investigation in the correlation between Ayurvedic body-constitution and food-taste preference

    Particle swarm optimization for Bluetooth scatternet formation

    In this paper, we present and evaluate the performance of a new particle swarm optimization (PSO) based scatternet formation protocol for single-hop Bluetooth networks. A scatternet is a network of piconets, each consisting of a master and a maximum of seven slaves, interconnected by bridge nodes. The scatternet formation algorithm applies a discrete particle swarm optimization (DPSO) technique to determine the role of each node and to find the best combination of masters, slaves and bridges in the Bluetooth network. The formed scatternets have desirable properties: an optimal number of piconets, at most eight nodes per piconet, reduced end-user delay, minimized overhead traffic, and a simple implementation of the proposed algorithm.
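    A hedged sketch of the general approach (the particle encoding, update rule and fitness function below are illustrative assumptions, not the paper's formulation): a binary discrete PSO searches over master/slave role assignments, and the fitness penalizes any piconet that would exceed eight nodes (one master plus seven slaves) while preferring as few piconets as possible.

```python
import random

def fitness(is_master, n_nodes):
    """Scatternet cost: fewer piconets is better, but each piconet may
    hold at most 8 nodes (1 master + 7 slaves); violations are penalized."""
    n_masters = sum(is_master)
    if n_masters == 0:
        return float('inf')
    n_slaves = n_nodes - n_masters
    worst = -(-n_slaves // n_masters)        # ceil: largest piconet's slave count
    penalty = 1000 if worst > 7 else 0
    return n_masters + penalty

def dpso(n_nodes=20, n_particles=10, iters=50, seed=0):
    """Binary discrete PSO: each particle is a 0/1 vector marking which
    nodes are masters; bits are stochastically pulled toward the personal
    and global bests, with occasional random exploration flips."""
    random.seed(seed)
    parts = [[random.randint(0, 1) for _ in range(n_nodes)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in parts]
    gbest = min(pbest, key=lambda p: fitness(p, n_nodes))[:]
    for _ in range(iters):
        for i, p in enumerate(parts):
            for j in range(n_nodes):
                r = random.random()
                if r < 0.4:
                    p[j] = pbest[i][j]       # pull toward personal best
                elif r < 0.8:
                    p[j] = gbest[j]          # pull toward global best
                elif r < 0.9:
                    p[j] = 1 - p[j]          # exploration flip
            if fitness(p, n_nodes) < fitness(pbest[i], n_nodes):
                pbest[i] = p[:]
        gbest = min(pbest + [gbest], key=lambda p: fitness(p, n_nodes))[:]
    return gbest
```

    For 20 nodes the smallest feasible master count is 3 (two masters would leave nine slaves each), so a good run converges to a fitness near 3.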

    SPAN: spike pattern association neuron for learning spatio-temporal sequences

    Spiking Neural Networks (SNN) have been shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important research problem. This article presents SPAN, a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion, allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, the well-known Widrow–Hoff rule can be applied directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated with regard to its learning capabilities, memory capacity, robustness to noisy stimuli and classification performance. Differences and similarities between SPAN and two related algorithms, ReSuMe and Chronotron, are discussed.
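    The core transformation described above can be sketched as follows (a simplification: a plain exponential kernel, a discrete time grid, and illustrative constants stand in for the paper's choices). Spike trains are kernelized into analog signals, and the Widrow–Hoff rule then yields a weight change from the overlap of the input trace with the error between the desired and actual output traces.

```python
import math

def kernelize(spike_times, t_max=100, tau=5.0):
    """Convolve a spike train (list of firing times) with a causal
    exponential kernel, turning it into an analog signal on t = 0..t_max-1."""
    return [sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)
            for t in range(t_max)]

def span_delta_w(input_train, desired_train, actual_train, lr=0.01, t_max=100):
    """Widrow-Hoff rule on kernelized spike trains:
    dw = lr * sum_t x(t) * (y_desired(t) - y_actual(t))."""
    x = kernelize(input_train, t_max)
    yd = kernelize(desired_train, t_max)
    ya = kernelize(actual_train, t_max)
    return lr * sum(xi * (d - a) for xi, d, a in zip(x, yd, ya))
```

    The sign of the update behaves as expected: if the actual output spike comes far too late, the input synapse is strengthened; if it comes too early relative to the desired time, it is weakened; if desired and actual trains coincide, the update is zero.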

    Method for Training a Spiking Neuron to Associate Input-Output Spike Trains

    We propose a novel supervised learning rule that allows a precise input-output behavior to be trained into a spiking neuron. A single neuron can be trained to associate (map) multiple different input spike trains to different output spike trains. Spike trains are transformed into continuous functions through appropriate kernels, and the Delta rule is then applied. The main advantage of the method is its algorithmic simplicity, which promotes its straightforward application to building spiking neural networks (SNN) for engineering problems. We experimentally demonstrate the suitability of the method for spatio-temporal classification on a synthetic benchmark problem. The obtained results show promising efficiency and precision of the proposed method.
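    To complement the abstract, here is a minimal end-to-end training loop under assumed details (exponential kernel, discrete time grid, illustrative learning rate): several input spike trains are kernelized, and the Delta rule adjusts the synaptic weights so that the weighted sum of the analog input traces approaches the analog trace of the desired output train.

```python
import math

def kernelize(spikes, t_max=60, tau=5.0):
    """Turn a spike train into an analog trace via an exponential kernel."""
    return [sum(math.exp(-(t - s) / tau) for s in spikes if s <= t)
            for t in range(t_max)]

def train(inputs, desired, epochs=200, lr=0.001):
    """Delta-rule training on kernelized spike trains: for each time step,
    nudge every weight by lr * error * input_trace."""
    xs = [kernelize(tr) for tr in inputs]
    yd = kernelize(desired)
    w = [0.0] * len(inputs)
    for _ in range(epochs):
        for t in range(len(yd)):
            y = sum(wi * x[t] for wi, x in zip(w, xs))
            err = yd[t] - y
            for i in range(len(w)):
                w[i] += lr * err * xs[i][t]
    return w

def mse(w, inputs, desired):
    """Mean squared error between the weighted analog output and the
    kernelized desired train."""
    xs = [kernelize(tr) for tr in inputs]
    yd = kernelize(desired)
    return sum((ydt - sum(wi * x[t] for wi, x in zip(w, xs))) ** 2
               for t, ydt in enumerate(yd)) / len(yd)
```

    After training, the fitted weights reproduce the desired trace far better than the zero-weight starting point, which is the sense in which the neuron has "associated" the input trains with the output train.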